Web Player Animation Pipeline and Art Upgrade

When I joined the company, I inherited an existing infrastructure. The pipeline for playing CG content was built in Unity, driven by a codebase layered on top of it and connected to a web-based node graph. The benefit of this setup was that it let non-technical content writers create their own CG content - the drawback was that it was locked into its original design.

Here are some of the changes I made to the animation pipeline, in order to make it easier to upgrade and add content to this medium:

Jali Migration

When I joined, facial animation was hand-keyed by a team of animators in Unity using Lip Sync Pro. The results took a long time to produce and the quality wasn't great, so this was one of the first things we tackled.

I set up our character rigs to work with Jali - a Maya-based facial animation tool driven by audio and text input - and trained our animators to use it. After that, I worked with our Engineering department to implement the backend changes needed to accept the new animation data, and made the necessary changes to our characters' animation controllers.
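As a rough illustration of the Unity side, a Jali-exported facial clip can be slotted into an existing controller at runtime with an AnimatorOverrideController. This is a minimal sketch under that assumption; the placeholder clip name and component are illustrative, not our production code.

```csharp
using UnityEngine;

// Minimal sketch: swap a Jali-exported facial clip into a character's existing
// Animator without rebuilding the controller. "FacialPlaceholder" is an
// illustrative clip name, not the production value.
public class JaliFacialLoader : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void PlayFacialClip(AnimationClip jaliClip)
    {
        // Wrap the current controller so only the facial slot is replaced.
        var overrides = new AnimatorOverrideController(animator.runtimeAnimatorController);
        overrides["FacialPlaceholder"] = jaliClip;
        animator.runtimeAnimatorController = overrides;
    }
}
```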

The impact: facial animation that used to take weeks of work per project now took days.

Generic Rig

The next major initiative was to consolidate all characters onto a single rig and upgrade the art.

As the system was originally designed, there were 8 characters, each with its own unique rig, and each of the 32 animations in the library had been hand-authored per character to cover minor permutations: 4 variants for mood, 2 variants for sitting / standing, and 8 variants for character. So adding a single new animation meant creating 4 × 2 × 8 = 64 clips.

I went in and reskinned all characters to one rig, adding "move" joints under the major facial landmarks (eyes, nose, and mouth) to allow for facial retargeting. Animation was never applied to the move joints, so an artist could shift the position of each joint group to match the facial features of a given character.

With that accomplished, the next step was to create an animation controller that imitated the original system's behavior, but split each component of an animation into its own layer and blended the layers together.
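To make that concrete, the layer stack can be thought of along the lines of the sketch below. The layer names are illustrative placeholders; in production the layers, masks, and weights live in the Animator Controller asset rather than in code.

```csharp
using UnityEngine;

// Rough sketch of the consolidated layer stack. Layer names are placeholders;
// the real layers and their avatar masks are authored in the controller asset.
public class LayerStackSetup : MonoBehaviour
{
    [SerializeField] private Animator animator;

    void Start()
    {
        // Layer 0 (base) carries the looping idle and the action gesture.
        // The remaining layers are blended on top at full weight:
        animator.SetLayerWeight(animator.GetLayerIndex("Facial"), 1f); // Jali facial animation
        animator.SetLayerWeight(animator.GetLayerIndex("Legs"), 1f);   // sit / stand, avatar-masked
        animator.SetLayerWeight(animator.GetLayerIndex("Arms"), 1f);   // arm pose the idle returns to
        // Legacy eye and lip-sync layers stay in the controller for backward compatibility.
    }
}
```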

Keeping to the "action / idle" sequence (as that is how the web interface was designed), I left the base layer the same: the idle is a looping standing idle animation, and the action is a gesture.

The next layer up was earmarked for facial animation loaded through Jali. The eye layers and Lip Sync were legacy, and we kept those in for backward compatibility.

The leg layer is called at the beginning of the sequence and uses an avatar mask to override the legs and lower spine, dictating whether the character is sitting or standing.

The arm layer, finally, controls the pose the arms return to during the idle sequence. This imitated how the previous library conveyed mood - i.e. the character gestures during the action, and then the arms return to a crossed position to indicate anger, or to the lap to indicate appeasement.
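Putting those layers together, a single web-player beat can be driven roughly as sketched below. State names like "Sit", "Stand", and "ArmsCrossed", and the blend times, are illustrative assumptions rather than the production setup.

```csharp
using UnityEngine;

// Hedged sketch of driving one action / idle sequence through the layered
// controller. All state names and transition durations are illustrative.
public class SequenceDriver : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void BeginSequence(bool seated, string moodPose, string actionGesture)
    {
        int legs = animator.GetLayerIndex("Legs");
        int arms = animator.GetLayerIndex("Arms");

        // Leg layer: avatar mask overrides legs + lower spine with sit or stand.
        animator.CrossFade(seated ? "Sit" : "Stand", 0.25f, legs);

        // Arm layer: the pose the arms settle back into during idle, e.g. "ArmsCrossed".
        animator.CrossFade(moodPose, 0.25f, arms);

        // Base layer (0): play the action gesture, then fall back to the looping idle.
        animator.CrossFade(actionGesture, 0.1f, 0);
    }
}
```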

In a more ideal world I would probably have integrated Biped IK - a Unity-based IK tool - but we had to avoid impacting the larger tech infrastructure, and that would have required much bigger backend adjustments.

Finally, I created documentation for our animators so they understood the "rules of the road" for upgrading animations in this new framework, and worked with our CG Artist to define a path forward on art upgrades.

We decided to go with a photogrammetry port, so that clients could "purchase" a new character look relatively quickly while keeping an art style that held up on very low-performing mobile headsets. We went with renderpeople.com because of their licensing terms and variety of options.

Our long-term strategy is to fully automate facial animation for this pipeline by migrating it to a combination of OVR for the Unity scene and Salsa for the web player.
